

MongoDB Atlas With Terraform: Database Users and Vault

Samuel Molling8 min read • Published Apr 15, 2024 • Updated Apr 15, 2024
In this tutorial, I will show how to create a user for a MongoDB database in Atlas using Terraform, and how to store that credential securely in HashiCorp Vault. In the previous article, MongoDB Atlas With Terraform - Cluster and Backup Policies, we saw how to create a cluster with configured backup policies. Now, we will go ahead and create our first user. If you haven't read the previous articles, I suggest you start there to understand how to get set up.
This article is for anyone who intends to use or already uses infrastructure as code (IaC) on the MongoDB Atlas platform or wants to learn more about it.
Everything we do here is covered in the provider/resource documentation.
Note: We will not use a backend file. However, for production implementations, it is extremely important and safer to store the state file in a remote location such as S3, GCS, or Azure Blob Storage.

Creating a User

At this point, we will create our first user using Terraform in MongoDB Atlas and store the URI to connect to our cluster in HashiCorp Vault. For those unfamiliar, HashiCorp Vault is a secrets management tool that allows you to securely store, access, and manage sensitive credentials such as passwords, API keys, certificates, and more. It is designed to help organizations protect their data and infrastructure in complex, distributed IT environments. In it, we will store the connection URI of the user that will be created for the cluster we created in the last article.
Before we begin, make sure that all the prerequisites mentioned in the previous article are properly configured: Install Terraform, create an API key in MongoDB Atlas, and set up a project and a cluster in Atlas. These steps are essential to ensure the success of creating your database user.

Configuring HashiCorp Vault to run on Docker

The first step is to run HashiCorp Vault so that we can test our module. It is possible to run Vault locally in Docker. If you don't have Docker installed, you can download it. After installing Docker, we will pull the image we want to run, in this case the Vault image. To do this, execute docker pull vault:1.13.3 in the terminal, or download it using Docker Desktop.
[Screenshot: searching for the Vault image in Docker]
Now, we will create a container from this image. Click on the image, then click Run. A dialog will open where we only need to map a port from our computer to the container. In this case, I will use port 8200, which is Vault's default port. Click Run.
[Screenshot: configuring the container port in Docker]
The container will start running. If we go to our browser and enter the URL localhost:8200/, the Vault login screen will appear.
[Screenshot: Vault sign-in screen]
To access the Vault, we will use the Root Token that is generated when we create the container.
[Screenshot: container logs containing the root token]
Now, we will log in. Once inside, we will create a new KV-type secrets engine to illustrate things a little better. Click Secrets Engines -> Enable new Engine -> Generic KV and click Next.
[Screenshot: secrets engine selection screen]
In Path, enter kv/my_app and click Enable Engine. Now we have our Vault configured and working.
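If you prefer to avoid clicking through the UI, the same engine can also be enabled with Terraform itself, using the Vault provider's vault_mount resource. This is just a sketch, separate from the article's module, and assumes the Vault provider is already configured and authenticated:

```hcl
# Hypothetical alternative to the UI steps above: enable the KV v2
# secrets engine at the same path via Terraform.
resource "vault_mount" "my_app" {
  path    = "kv/my_app"
  type    = "kv"
  options = { version = "2" }
}
```

Either way, the result is the same KV v2 engine mounted at kv/my_app.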

Terraform provider configuration for MongoDB Atlas and HashiCorp Vault

The next step is to configure the Terraform providers. This will allow Terraform to communicate with the MongoDB Atlas and Vault APIs to manage resources. Add the following block of code to your providers.tf file:
```hcl
provider "mongodbatlas" {}

provider "vault" {
  address          = "http://localhost:8200"
  token            = "hvs.brmNeZd31NwEmyky1uYI2wvY"
  skip_child_token = true
}
```
In the previous article, we configured the Terraform provider by placing our public and private keys in environment variables, and we will continue that way. What is new is the vault provider: in it, we configure the Vault address, the authentication token, and the skip_child_token parameter so that we can authenticate to the Vault.
Note: It is not advisable to hard-code the authentication token in a production environment. Use one of the authentication methods recommended by HashiCorp, such as AppRole. You can evaluate the options in Terraform's docs.
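As a rough illustration of what that can look like, the Vault provider supports logging in through an auth_login block instead of a static token. The variable names below are hypothetical; adapt them to wherever you keep your AppRole credentials:

```hcl
# Sketch: authenticating the Vault provider with AppRole instead of
# a hard-coded token. role_id/secret_id would come from variables,
# the environment, or another secure source.
provider "vault" {
  address = "http://localhost:8200"

  auth_login {
    path = "auth/approle/login"
    parameters = {
      role_id   = var.role_id
      secret_id = var.secret_id
    }
  }
}
```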

Creating the Terraform version file

The version file has the same purpose as in the other articles, with one addition: the version of the Vault provider.
```hcl
terraform {
  required_version = ">= 0.12"
  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = "1.14.0"
    }
    vault = {
      source  = "hashicorp/vault"
      version = "4.0.0"
    }
  }
}
```

Defining the database user and vault resource

After configuring the version file and establishing the Terraform and provider versions, the next step is to define the user resource in MongoDB Atlas. This is done by creating a .tf file — for example, main.tf — where we will create our module. As we are going to make a module that will be reusable, we will use variables and default values so that other calls can create users with different permissions, without having to write a new module.
```hcl
# ------------------------------------------------------------------------------
# RANDOM PASSWORD
# ------------------------------------------------------------------------------
resource "random_password" "default" {
  length  = var.password_length
  special = false
}

# ------------------------------------------------------------------------------
# DATABASE USER
# ------------------------------------------------------------------------------
resource "mongodbatlas_database_user" "default" {
  project_id         = data.mongodbatlas_project.default.id
  username           = var.username
  password           = random_password.default.result
  auth_database_name = var.auth_database_name

  dynamic "roles" {
    for_each = var.roles
    content {
      role_name       = try(roles.value["role_name"], null)
      database_name   = try(roles.value["database_name"], null)
      collection_name = try(roles.value["collection_name"], null)
    }
  }

  dynamic "scopes" {
    for_each = var.scope
    content {
      name = scopes.value["name"]
      type = scopes.value["type"]
    }
  }

  dynamic "labels" {
    for_each = local.tags
    content {
      key   = labels.key
      value = labels.value
    }
  }
}

resource "vault_kv_secret_v2" "default" {
  mount     = var.vault_mount
  name      = var.secret_name
  data_json = jsonencode(local.secret)
}
```
At the beginning of the file, we have the random_password resource, which generates a random password for our user. In the mongodbatlas_database_user resource, we specify the user details. As in other articles, some values come from variables, such as username and auth_database_name, the latter with a default value of admin. Below, we create three dynamic blocks: roles, scopes, and labels. roles is a list of maps that can contain the role_name (read, readWrite, or another role), the database_name, and the collection_name. The last two values can be optional: a user with atlasAdmin permission, for example, does not need a specific database or collection, and you can also specify only a database without a particular collection, as we will do in an example. For the scopes block, the type is either DATA_LAKE or CLUSTER. In our case, we specify a cluster: the demo cluster created in the last article. Finally, the labels serve as tags for our user.
Finally, we define the vault_kv_secret_v2 resource, which will create a secret in our Vault. It receives the mount where the secret will be created and the secret's name. data_json is the value of the secret; we build it in the locals.tf file, which we will look at below. Since it is a JSON value, we encode it with jsonencode.
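To illustrate why storing the URI in Vault is convenient, here is a hypothetical sketch, not part of this module, of how another Terraform configuration could read the secret back using the Vault provider's vault_kv_secret_v2 data source:

```hcl
# Sketch: consuming the stored URI from a separate configuration.
data "vault_kv_secret_v2" "mongodb" {
  mount = "kv/my_app"
  name  = "MY_MONGODB_SECRET"
}

output "mongodb_uri" {
  value     = data.vault_kv_secret_v2.mongodb.data["URI"]
  sensitive = true
}
```

The application team never needs to see the raw password; they only need read access to this one Vault path.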
In the variable.tf file, we declare our variables, some of them with default values:
```hcl
variable "project_name" {
  description = "The name of the Atlas project"
  type        = string
}

variable "cluster_name" {
  description = "The name of the Atlas cluster"
  type        = string
}

variable "password_length" {
  description = "The length of the password"
  type        = number
  default     = 20
}

variable "username" {
  description = "The username of the database user"
  type        = string
}

variable "auth_database_name" {
  description = "The name of the database in which the user is created"
  type        = string
  default     = "admin"
}

variable "roles" {
  description = <<-HEREDOC
  Required - One or more user roles blocks.
  HEREDOC
  type        = list(map(string))
}

variable "scope" {
  description = "The scopes to assign to the user"
  type = list(object({
    name = string
    type = string
  }))
  default = []
}

variable "labels" {
  description = "A map containing additional labels"
  type        = map(any)
  default     = null
}

variable "uri_options" {
  description = "A string containing additional URI options"
  type        = string
  default     = "retryWrites=true&w=majority&readPreference=secondaryPreferred"
}

variable "vault_mount" {
  description = "The mount point for the Vault secret"
  type        = string
}

variable "secret_name" {
  description = "The name of the Vault secret"
  type        = string
}

variable "application" {
  description = <<-HEREDOC
  Optional - Key-value pairs that tag and categorize the cluster for billing and organizational purposes.
  HEREDOC
  type        = string
}

variable "environment" {
  description = <<-HEREDOC
  Optional - Key-value pairs that tag and categorize the cluster for billing and organizational purposes.
  HEREDOC
  type        = string
}
```
We configure a file called locals.tf with the values for our Vault and the tags to be created, as in the last article. The interesting part here is that we define how our user's connection string will be assembled and saved in the Vault. We could save only the username and password, but I personally prefer to save the full URI. This way, I can bake in some good practices, such as connection options like readPreference, without depending on the developer to set them in the application. In the code below, there is some text manipulation so that the URI comes out correct. At the end, I create a local called secret that has a URI key holding the assembled URI.
```hcl
locals {
  private_connection_srv    = data.mongodbatlas_advanced_cluster.default.connection_strings.0.standard_srv
  cluster_uri               = trimprefix(local.private_connection_srv, "mongodb+srv://")
  private_connection_string = "mongodb+srv://${mongodbatlas_database_user.default.username}:${random_password.default.result}@${local.cluster_uri}/${var.auth_database_name}?${var.uri_options}"

  secret = { "URI" = local.private_connection_string }

  tags = {
    name        = var.application
    environment = var.environment
  }
}
```
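If you want to sanity-check this string manipulation outside of Terraform, here is a small Python sketch with hypothetical values (the host, username, and password below are made up) that mirrors trimprefix and the interpolation above:

```python
# Hypothetical values standing in for the Terraform data source outputs.
standard_srv = "mongodb+srv://cluster-demo.ab1cd.mongodb.net"
username = "usr_myapp"
password = "s3cr3t-example"  # stands in for random_password.default.result
auth_database_name = "admin"
uri_options = "retryWrites=true&w=majority&readPreference=secondaryPreferred"

# Equivalent of trimprefix(local.private_connection_srv, "mongodb+srv://")
cluster_uri = standard_srv.removeprefix("mongodb+srv://")

# Equivalent of local.private_connection_string
uri = f"mongodb+srv://{username}:{password}@{cluster_uri}/{auth_database_name}?{uri_options}"
print(uri)
```

The prefix is stripped and re-added so that the credentials can be injected between the scheme and the host.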
In this article, we use data sources in Terraform to establish a dynamic connection with existing resources, such as our MongoDB Atlas project and our cluster. Specifically, in the data.tf file, we define the mongodbatlas_project and mongodbatlas_advanced_cluster data sources to access information about the existing project and cluster based on their names:
```hcl
data "mongodbatlas_project" "default" {
  name = var.project_name
}

data "mongodbatlas_advanced_cluster" "default" {
  project_id = data.mongodbatlas_project.default.id
  name       = var.cluster_name
}
```
Finally, we define our variables file, terraform.tfvars:
```hcl
project_name = "project-test"
username     = "usr_myapp"
application  = "teste-cluster"
environment  = "dev"
cluster_name = "cluster-demo"

roles = [{
  "role_name"       = "readWrite",
  "database_name"   = "db1",
  "collection_name" = "collection1"
  }, {
  "role_name"     = "read",
  "database_name" = "db2"
}]

scope = [{
  name = "cluster-demo",
  type = "CLUSTER"
}]

secret_name = "MY_MONGODB_SECRET"
vault_mount = "kv/my_app"
```
These values defined in terraform.tfvars are used by Terraform to populate the corresponding variables in your configuration. Here, we specify the user's scope, values for the Vault, and the user's roles. The user will have readWrite permission on collection1 of db1, and read permission on all collections of db2, scoped to the demo cluster.
The file structure is as follows:
  • main.tf: In this file, we define the main resources, mongodbatlas_database_user and vault_kv_secret_v2, along with the random password generation.
  • provider.tf: This file is where we define the providers we are using: mongodbatlas and vault.
  • terraform.tfvars: This file contains the variables that will be used in our module — for example, the user name and Vault information, among others.
  • variable.tf: Here, we define the variables mentioned in the terraform.tfvars file, specifying the type and, optionally, a default value.
  • version.tf: This file is used to specify the version of Terraform and the providers we are using.
  • data.tf: Here, we specify the data sources that bring us information about our project and previously created cluster. We look them up by name, and they give our module the project ID and cluster information, such as its connection string.
  • locals.tf: Here, we specify example tags for our user, along with the string manipulation that builds the URI stored in the Vault.
Now is the time to apply. =D
We run terraform init in the terminal, in the folder where the files are located, so that it downloads the providers, modules, etc.
Note: Remember to export the environment variables with the public and private keys.

```shell
export MONGODB_ATLAS_PUBLIC_KEY="your_public_key"
export MONGODB_ATLAS_PRIVATE_KEY="your_private_key"
```
Now, we run terraform plan, as in previous articles, and check that the plan is exactly what we expect. Then we run terraform apply to create the resources. When running the terraform apply command, you will be prompted for approval with yes or no. Type yes.
Now, let's look in Atlas to see if the user was created successfully...
[Screenshot: the user displayed in Database Access]
[Screenshot: the user's access permissions]
Let's also look in the Vault to see if our secret was created.
[Screenshot: the MongoDB secret URI in Vault]
It was created successfully! Now, let's test if the URI is working perfectly.
This is the format of the URI that is generated: mongosh "mongodb+srv://usr_myapp:<password>@<clusterEndpoint>/admin?retryWrites=true&w=majority&readPreference=secondaryPreferred"
[Screenshot: mongosh login]
We connect and make an insertion to evaluate whether the permissions are adequate, starting with collection1 in db1.
[Screenshot: insert command acknowledged]
Success! Now, let's try db3 to confirm that the user does not have permission on another database.
[Screenshot: access denied to an unauthorized collection]
Excellent: permission denied, as expected.
We have reached the end of this series of articles about MongoDB. I hope they were enlightening and useful for you!
To learn more about MongoDB and various tools, I invite you to visit the Developer Center to read the other articles.